Prominent AI folk like Hans Moravec and Marvin Minsky had predicted the eclipse of humanity and humane values (save perhaps as pets/specimens/similar, losing the overwhelming majority of the future) long before Yudkowsky.
A minor point maybe, but...how big is the fraction of all AI researchers and computer scientists who fall into that category?
I.J. Good, Marcus Hutter, Jurgen Schmidhuber, Kevin Warwick, Stephen Omohundro, Vinge etc.
Those are really just a handful of names. And their practically useful accomplishments are few. Most AI researchers would consider them dreamers.
Yudkowsky has spent more time on the topic than any of the others on this list,
This is frequently mentioned, but it carries little evidential weight. Many smart people, like Roger Penrose, spent a lot of time on their pet theories. That does not validate them; it just allowed them to find better ways to rationalize their ideas.
I.J. Good, Marcus Hutter, Jurgen Schmidhuber, Kevin Warwick, Stephen Omohundro, Vinge etc.
Those are really just a handful of names. And their practically useful accomplishments are few. Most AI researchers would consider them dreamers.
Good was involved in the very early days of computing, so it is a bit hard for him to have done modern AI work. But the work he did do was pretty impressive. He did cryptography work in World War II with Alan Turing, and both during and after the war he worked on both theoretical and practical computer systems. He also did a lot of probability work, much of which is used today, in some form or another, in a variety of fields including AI. For example, look at the Good-Turing estimator.
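For anyone who hasn’t run into it, here is a minimal sketch of the basic (unsmoothed) Good-Turing adjustment; the function name and the toy word counts are illustrative only, not anything from this discussion:

```python
from collections import Counter

def good_turing(counts):
    """Basic (unsmoothed) Good-Turing estimate.

    counts: observed frequency of each distinct item.
    Returns (p_unseen, adjusted): the probability mass reserved for
    never-observed items, and a map from each observed frequency r to
    its adjusted count r* = (r + 1) * N_{r+1} / N_r.
    """
    n_total = sum(counts)
    freq_of_freq = Counter(counts)                 # N_r: number of items seen exactly r times
    p_unseen = freq_of_freq.get(1, 0) / n_total    # unseen mass: N_1 / N
    adjusted = {}
    for r, n_r in freq_of_freq.items():
        n_next = freq_of_freq.get(r + 1, 0)
        # The naive formula breaks when N_{r+1} is 0; practical versions
        # (e.g. "Simple Good-Turing") smooth the N_r values first.
        adjusted[r] = (r + 1) * n_next / n_r if n_next else r
    return p_unseen, adjusted

# Tiny illustrative example: frequencies of six distinct words in a corpus.
print(good_turing([3, 1, 1, 2, 1, 5]))
```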
Schmidhuber did some of the first work on practical genetic algorithms and made very important contributions to neural nets.
Warwick has done so much work in AI and robotics that listing it all would take a long time. One can argue that most of it hasn’t gone outside the lab, but it is clear that much of that work is practically useful, even if it is not yet economically feasible to use it on a large scale (which, frankly, is the status of most AI research in general at this point).
Overall, I don’t think your characterization is accurate, although your point that AI researchers with such concerns are a small percentage of all researchers seems valid.
Prominent AI folk like Hans Moravec and Marvin Minsky had predicted the eclipse of humanity and humane values (save perhaps as pets/specimens/similar, losing the overwhelming majority of the future) long before Yudkowsky.
A minor point maybe, but...how big is the fraction of all AI researchers and computer scientists who fall into that category?
I.J. Good, Marcus Hutter, Jurgen Schmidhuber, Kevin Warwick, Stephen Omohundro, Vinge etc.
A strange list, IMO. Fredkin was one who definitely entertained the “pets” hypothesis:
http://www.dai.ed.ac.uk/homes/cam/Robots_Wont_Rule2.shtml#Bad
It was rumoured in some of the UK national press of the time that Margaret Thatcher watched Professor Fredkin being interviewed on a late night TV science programme. Fredkin explained that superintelligent machines were destined to surpass the human race in intelligence quite soon, and that if we were lucky they might find human beings interesting enough to keep us around as pets.
It was generated by selecting (some) people who had written publications in the area, not merely made oral statements. Broadening to include the latter would catch many more folk.
Your list of Hans Moravec and Marvin Minsky was fine—though I believe Moravec characterised humans being eaten by robots as follows:
...though he did go on to say that he was “not too bothered” by that because “in the long run, that’s how it’s going to be anyway”.
I was more complaining about XiXiDu’s “reframing” of the list.
I was thinking of “Robot: Mere Machine to Transcendent Mind”, where he talks about an era in which humans survive through local tame robots, but eventually are devoured by competitive minds that have escaped beyond immediate control.